<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPEW34M/45DAPHE</identifier>
		<repository>sid.inpe.br/sibgrapi/2021/09.08.22.59</repository>
		<lastupdate>2021:09.08.22.59.27 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2021/09.08.22.59.27</metadatarepository>
		<metadatalastupdate>2022:09.10.00.16.17 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2021}</metadatalastupdate>
		<citationkey>FerreiraMartNasc:2021:SyReHu</citationkey>
		<title>Synthesizing realistic human dance motions conditioned by musical data using graph convolutional networks</title>
		<format>On-line</format>
		<year>2021</year>
		<numberoffiles>1</numberoffiles>
		<size>16203 KiB</size>
		<author>Ferreira, João Pedro Moreira,</author>
		<author>Martins, Renato,</author>
		<author>Nascimento, Erickson Rangel,</author>
		<affiliation>Universidade Federal de Minas Gerais</affiliation>
		<affiliation>Université Bourgogne Franche-Comté</affiliation>
		<affiliation>Universidade Federal de Minas Gerais</affiliation>
		<editor>Paiva, Afonso,</editor>
		<editor>Menotti, David,</editor>
		<editor>Baranoski, Gladimir V. G.,</editor>
		<editor>Proença, Hugo Pedro,</editor>
		<editor>Junior, Antonio Lopes Apolinario,</editor>
		<editor>Papa, João Paulo,</editor>
		<editor>Pagliosa, Paulo,</editor>
		<editor>dos Santos, Thiago Oliveira,</editor>
		<editor>e Sá, Asla Medeiros,</editor>
		<editor>da Silveira, Thiago Lopes Trugillo,</editor>
		<editor>Brazil, Emilio Vital,</editor>
		<editor>Ponti, Moacir A.,</editor>
		<editor>Fernandes, Leandro A. F.,</editor>
		<editor>Avila, Sandra,</editor>
		<e-mailaddress>joaopmoferreira@gmail.com</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 34 (SIBGRAPI)</conferencename>
		<conferencelocation>Gramado, RS, Brazil (virtual)</conferencelocation>
		<date>18-22 Oct. 2021</date>
		<publisher>Sociedade Brasileira de Computação</publisher>
		<publisheraddress>Porto Alegre</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Master's or Doctoral Work</tertiarytype>
		<transferableflag>1</transferableflag>
		<keywords>Human motion generation, sound and dance processing, multimodal learning, conditional adversarial nets, graph convolutional neural networks.</keywords>
		<abstract>Learning to move naturally from music, i.e., to dance, is one of the most complex motor tasks that humans often perform effortlessly. Synthesizing human motion through learning techniques is becoming an increasingly popular approach to alleviating the need for new data capture when producing animations. Most approaches that address the problem of automatic dance motion synthesis with classical convolutional and recursive neural models suffer from training and variability issues due to the non-Euclidean geometry of the motion manifold structure. In this thesis, we design a novel method based on graph convolutional networks that overcomes the aforementioned issues to tackle the problem of automatic dance generation from audio information. Our method uses an adversarial learning scheme conditioned on the input music audio to create natural motions that preserve the key movements of different music styles. We also collected, annotated, and made publicly available a novel multimodal dataset with paired audio, motion data, and videos of people dancing to three different music styles, as a common ground for evaluating dance generation approaches. The results suggest that the proposed GCN model outperforms the state-of-the-art music-conditioned dance generation method in different experiments. Moreover, our graph-convolutional approach is simpler, easier to train, and capable of generating more realistic motion styles according to qualitative and several quantitative metrics. It also presents a visual movement perceptual quality comparable to real motion data. The dataset, source code, and qualitative results are available on the project's webpage: https://verlab.github.io/Learning2Dance_CAG_2020/.</abstract>
		<language>en</language>
		<targetfile>wtd-sibgrapi-joao.pdf</targetfile>
		<usergroup>joaopmoferreira@gmail.com</usergroup>
		<visibility>shown</visibility>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<nexthigherunit>8JMKD3MGPEW34M/45PQ3RS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2021/11.12.11.46 8</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2021/09.08.22.59</url>
	</metadata>
</metadatalist>